<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<site>sibgrapi.sid.inpe.br 802</site>
		<holdercode>{ibi 8JMKD3MGPEW34M/46T9EHH}</holdercode>
		<identifier>8JMKD3MGPBW34M/3869EA2</identifier>
		<repository>sid.inpe.br/sibgrapi/2010/08.28.18.54</repository>
		<lastupdate>2010:08.28.18.54.16 sid.inpe.br/banon/2001/03.30.15.38 administrator</lastupdate>
		<metadatarepository>sid.inpe.br/sibgrapi/2010/08.28.18.54.16</metadatarepository>
		<metadatalastupdate>2022:06.14.00.06.50 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2010}</metadatalastupdate>
		<doi>10.1109/SIBGRAPI.2010.31</doi>
		<citationkey>CamaraNetoCamp:2010:ImImFe</citationkey>
		<title>On the Improvement of Image Feature Matching under Perspective Transformations</title>
		<format>Printed, On-line.</format>
		<year>2010</year>
		<numberoffiles>1</numberoffiles>
		<size>2961 KiB</size>
		<author>Camara Neto, Vilar Fiuza da,</author>
		<author>Campos, Mario Fernando Montenegro,</author>
		<affiliation>Fundação Centro de Análise, Pesquisa e Inovação Tecnológica - FUCAPI</affiliation>
		<affiliation>Universidade Federal de Minas Gerais</affiliation>
		<editor>Bellon, Olga,</editor>
		<editor>Esperança, Claudio,</editor>
		<e-mailaddress>neto@dcc.ufmg.br</e-mailaddress>
		<conferencename>Conference on Graphics, Patterns and Images, 23 (SIBGRAPI)</conferencename>
		<conferencelocation>Gramado, RS, Brazil</conferencelocation>
		<date>30 Aug.-3 Sep. 2010</date>
		<publisher>IEEE Computer Society</publisher>
		<publisheraddress>Los Alamitos</publisheraddress>
		<booktitle>Proceedings</booktitle>
		<tertiarytype>Full Paper</tertiarytype>
		<transferableflag>1</transferableflag>
		<versiontype>finaldraft</versiontype>
		<keywords>visual feature correspondence, image matching.</keywords>
		<abstract>This paper presents a novel methodology to perform consistent matching between visual features of a pair of images, particularly in the case of point-of-view changes between shots. Traditionally, such correspondences are determined by computing the similarity between invariant descriptor vectors associated with each feature point. Our methodology first obtains a coarse global registration between the images, which constrains the correspondence space. Then, it analyzes the similarity among descriptors, thus reducing both the number and the severity of mismatches. The approach is sufficiently generic to be used with many feature descriptor methods. We present several experimental results that show a significant increase both in accuracy and in the number of successful matches.</abstract>
		<language>en</language>
		<targetfile>sibgrapi2010vilarnt-matching.pdf</targetfile>
		<usergroup>neto@dcc.ufmg.br</usergroup>
		<visibility>shown</visibility>
		<nexthigherunit>8JMKD3MGPEW34M/46SJT6B</nexthigherunit>
		<nexthigherunit>8JMKD3MGPEW34M/4742MCS</nexthigherunit>
		<citingitemlist>sid.inpe.br/sibgrapi/2022/05.14.20.21 7</citingitemlist>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/sibgrapi/2010/08.28.18.54</url>
	</metadata>
</metadatalist>